

Current and former Block workers say AI can't do their jobs after Jack Dorsey's mass layoffs: 'You can't really AI that'

The Guardian

CEO Jack Dorsey being interviewed on the floor of the New York Stock Exchange on 19 November 2015. The CEO said he cut the company's workforce by 4,000 people - almost in half - because of gains in AI productivity. Mark remembers the first time he wondered whether he was teaching Block's AI tools how to do his job - and maybe even replace him. He was at his fintech company's extravagant anniversary party last September. As executives led a presentation on the productivity benefits of a new internal AI tool, Mark, who worked in the product department, discussed his worries with colleagues. While he wasn't sure what would happen in a few years, he told a co-worker sitting next to him that for now, there was no way the technology was so advanced that it could move the business forward without employees like him to help drive vision and strategy.


What was really behind Jack Dorsey laying off nearly half of Block's staff?

The Guardian

Jack Dorsey leaves the Élysée Palace in Paris, France, on 7 June 2019. Jack Dorsey cited AI as the driving force behind cutting 40% of his company's employees, but other factors such as a weak crypto market, overstaffing and a declining stock price may also have motivated the move. Last week, the financial technology company Block announced that it would lay off 4,000 of its 10,000 workers.


Inside the Rolling Layoffs at Jack Dorsey's Block

WIRED

Workers describe a deteriorating culture at Block, the company behind Square and Cash App, where layoffs continue and employees are expected to use AI tools daily. After hundreds of workers were laid off in early February from Jack Dorsey's Block, some of the people remaining at the company say the internal culture has devolved to a point where performance anxiety is running rampant, using generative AI is required, and overall morale is rapidly deteriorating. Block is the parent company behind the merchant payment processor Square and the payment app Cash App. "Morale is probably the worst I've felt in four years," reads an employee complaint submitted to Dorsey in a recent all-hands meeting, a transcript of which was seen by WIRED. "The overarching culture at Block is crumbling."


Jack Dorsey's Block Made an AI Agent to Boost Its Own Productivity

WIRED

At a company-wide hackathon this month, developers at finance firm Block built a dizzying number of prototype tools including a database debugger, a program for identifying duplicated code, and an app that automates Bitcoin support. The sudden productivity boost was driven by Goose, an artificial intelligence agent developed by Block several months ago that can help with coding and other work like knocking together data visualizations or mocking up new product features. "We've always had really strong hack weeks, but this one was at another level," says Jackie Brosamer, who leads the AI and data platform at Block. "We have tens of ideas that we're looking to bring to production." Goose helped developers at Block to develop a new agent-to-agent communication server at the hackathon.


Re-Ex: Revising after Explanation Reduces the Factual Errors in LLM Responses

Kim, Juyeon, Lee, Jeongeun, Chang, Yoonho, Choi, Chanyeol, Kim, Junseong, Sohn, Jy-yong

arXiv.org Artificial Intelligence

Mitigating hallucination is a key challenge that must be overcome to reliably deploy large language models (LLMs) in real-world scenarios. Recently, various methods have been proposed to detect and revise factual errors in LLM-generated text in order to reduce hallucination. In this paper, we propose Re-Ex, a method for post-editing LLM-generated responses. Re-Ex introduces a novel reasoning step dubbed the factual-error explanation step. Re-Ex revises the initial response of an LLM in three steps: first, external tools are used to retrieve evidence of the factual errors in the initial response; next, the LLM is instructed to explain the problematic parts of the response based on the gathered evidence; finally, the LLM revises the initial response using the explanations produced in the previous step. In addition to the explanation step, Re-Ex incorporates new prompting techniques that reduce the token count and inference time required for the revision process. Compared with existing methods including FacTool, CoVE, and RARR, Re-Ex provides better detection and revision performance with less inference time and fewer tokens on multiple benchmarks.
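The three-step loop described in the abstract can be sketched in code. This is a minimal, hypothetical illustration, not the authors' implementation: the function names, prompts, and the stubbed evidence retriever and toy LLM are all assumptions standing in for real external tools and a real model API.

```python
# Hypothetical sketch of the Re-Ex three-step revision loop.
# Prompts and function names are illustrative, not from the paper's code.

def retrieve_evidence(response):
    """Step 1: stand-in for an external tool (e.g. a search API) that
    returns evidence snippets relevant to claims in the response."""
    # A real system would issue web or database queries here.
    return ["Block laid off roughly 4,000 of its 10,000 employees."]

def explain_errors(response, evidence, llm):
    """Step 2: ask the model to explain which parts of the response
    conflict with the gathered evidence."""
    prompt = (f"Response: {response}\nEvidence: {evidence}\n"
              "Explain any factual errors in the response.")
    return llm(prompt)

def revise(response, explanation, llm):
    """Step 3: ask the model to rewrite the response guided by the
    explanation from step 2."""
    prompt = (f"Response: {response}\nErrors: {explanation}\n"
              "Rewrite the response so it is consistent with the evidence.")
    return llm(prompt)

def re_ex(response, llm):
    evidence = retrieve_evidence(response)
    explanation = explain_errors(response, evidence, llm)
    return revise(response, explanation, llm)

# Toy "LLM" so the sketch runs end to end without an API key.
toy_llm = lambda prompt: f"[model output for: {prompt[:40]}...]"
revised = re_ex("Block laid off 400 employees.", toy_llm)
```

The explanation step is the paper's key addition: instead of jumping straight from evidence to a rewrite, the model first articulates what is wrong, and that articulation is fed into the revision prompt.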


MenatQA: A New Dataset for Testing the Temporal Comprehension and Reasoning Abilities of Large Language Models

Wei, Yifan, Su, Yisong, Ma, Huanhuan, Yu, Xiaoyan, Lei, Fangyu, Zhang, Yuanzhe, Zhao, Jun, Liu, Kang

arXiv.org Artificial Intelligence

Large language models (LLMs) have shown nearly saturated performance on many natural language processing (NLP) tasks. As a result, it is natural to believe that LLMs have also mastered abilities such as time understanding and reasoning. However, research on the temporal sensitivity of LLMs has received insufficient emphasis. To fill this gap, this paper constructs Multiple Sensitive Factors Time QA (MenatQA), which encompasses three temporal factors (scope, order, and counterfactual) across a total of 2,853 samples for evaluating the time comprehension and reasoning abilities of LLMs. This paper tests current mainstream LLMs of different parameter sizes, ranging from billions to hundreds of billions. The results show that most LLMs fall behind smaller temporal-reasoning models to varying degrees on these factors. In particular, LLMs show a significant vulnerability to temporal biases and depend heavily on the temporal information provided in questions. Furthermore, this paper undertakes a preliminary investigation into potential improvement strategies by devising specific prompts and leveraging external tools. These approaches serve as valuable baselines or references for future research.
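To make the three temporal factors concrete, here is a sketch of what samples of each type might look like. The field names and example questions below are illustrative guesses, not MenatQA's actual schema or data.

```python
from collections import Counter

# Illustrative MenatQA-style samples, one per temporal factor.
# Field names and content are assumptions, not the dataset's real schema.
samples = [
    {"factor": "scope",   # answer depends on narrowing to a time window
     "question": "Where did the player play in 2012?",
     "context": "He played for Club A (2008-2011) and Club B (2011-2015).",
     "answer": "Club B"},
    {"factor": "order",   # answer depends on event ordering
     "question": "Which club did he join first?",
     "context": "He played for Club A (2008-2011) and Club B (2011-2015).",
     "answer": "Club A"},
    {"factor": "counterfactual",  # answer depends on a hypothetical timeline
     "question": "If he had retired in 2010, which club would have been his last?",
     "context": "He played for Club A (2008-2011) and Club B (2011-2015).",
     "answer": "Club A"},
]

# A per-factor tally, as one might compute when scoring a model by factor.
by_factor = Counter(s["factor"] for s in samples)
```

Evaluating by factor in this way is what lets the paper report that models degrade to different degrees on scope, order, and counterfactual questions.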


Apple's head of machine learning quits after being made to come back to the office three days a week

Daily Mail - Science & tech

A senior director at Apple has quit his job in protest at the company demanding staff return to the office three days a week. Ian Goodfellow, the director of machine learning, is believed to be the most senior employee to resign so far as a result of the plan. On April 11, the company began mandating one day a week in the office - a requirement that rose to two days on May 2. By May 23, all staff had to be at their desks three days a week. A survey of Apple workers from April 13-19 found 67 percent saying they were dissatisfied with the return-to-office policy, Fortune reported. And Goodfellow, in his resignation note, said he would not do it.


Monitoring the Cryptocurrency Space with NLP and Knowledge Graphs

#artificialintelligence

Every day, millions of articles and papers are published. While there is a lot of knowledge hidden in those articles, it is virtually impossible to read all of them. Even if you only focus on a specific domain, it is still hard to find all relevant articles and read them to get valuable insights. However, there are tools that could help you avoid manual labor and extract those insights automatically. I am, of course, talking about various NLP tools and services. In this blog post, I will present a solution of how you can combine the power of NLP with knowledge graphs to extract valuable insights from relevant articles automatically.
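The combination the post describes, extracting structured facts from article text and assembling them into a graph, can be sketched minimally. The regex pattern, relation names, and example sentences below are assumptions for illustration; a real pipeline would use an NER and relation-extraction model rather than a hand-written pattern.

```python
import re
from collections import defaultdict

# Minimal sketch: pull "X <relation> Y"-style triples from text and
# build an adjacency-list knowledge graph. The pattern is a toy stand-in
# for a proper relation-extraction model.
PATTERN = re.compile(r"(\w[\w ]*?) (acquired|launched|partnered with) (\w[\w ]*?)\.")

def extract_triples(text):
    """Return (subject, relation, object) triples found in the text."""
    return [(s.strip(), rel, o.strip()) for s, rel, o in PATTERN.findall(text)]

def build_graph(triples):
    """Store triples as subject -> [(relation, object), ...] edges."""
    graph = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return dict(graph)

news = "Block launched Goose. Square partnered with Cash App."
graph = build_graph(extract_triples(news))
```

Once facts are stored as edges keyed by entity, questions like "what is everything we know about Block?" become simple graph lookups rather than full-text searches.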


Twitter CEO Jack Dorsey says company was 'probably way too aggressive' in banning right-wing activists

The Independent - Tech

Twitter boss Jack Dorsey said the company has been too aggressive in banning right-wing activists from the site, despite some of them apparently being connected to harassment campaigns. Mr Dorsey and his company have been repeatedly criticised over the decisions they make around who should stay on Twitter and who should be banned. Activists on both the left and the right have accused the site of hosting extremists, and of having either too strict or too weak policies on banning users. Now he has taken to the Joe Rogan Experience podcast for an interview with the comedian, during which he suggested the company could be more lenient with such bans in the future.


Artificial Intelligence Has Got Some Explaining to Do

#artificialintelligence

During last Wednesday's congressional hearing about Twitter transparency, Twitter CEO Jack Dorsey was forced to take accountability for the damaging cultural and political effects of his company. Soft-spoken and contrite, Dorsey provided a stark contrast to Facebook's Mark Zuckerberg, who seemed more confident when he appeared before Congress in April. In the months since, collective faith in the fabric of the internet has been anything but restored; instead, consumers, politicians, and the tech companies themselves continue to grapple with the aftermath of what social platforms hath wrought. During the hearing, Representative Debbie Dingell asked Dorsey if Twitter's algorithms are able to learn from the decisions they make--like who they suggest users follow, which tweets rise to the top, and in some cases what gets flagged for violating the platform's terms of service or even who gets banned--and also if Dorsey could explain how all of this works. "Great question," Dorsey responded, seemingly excited at a line of questioning that piqued his intellectual curiosity.